
    To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods

    The main objective of eXplainable Artificial Intelligence (XAI) is to provide effective explanations for black-box classifiers. The existing literature lists many desirable properties for explanations to be useful, but there is no consensus on how to quantitatively evaluate explanations in practice. Moreover, explanations are typically used only to inspect black-box models, and the proactive use of explanations as decision support is generally overlooked. Among the many approaches to XAI, a widely adopted paradigm is Local Linear Explanations, with LIME and SHAP emerging as state-of-the-art methods. We show that these methods are plagued by many defects, including unstable explanations, divergence of actual implementations from the promised theoretical properties, and explanations for the wrong label. This highlights the need for standard and unbiased evaluation procedures for Local Linear Explanations in the XAI field. In this paper we address the problem of identifying a clear and unambiguous set of metrics for the evaluation of Local Linear Explanations. This set includes both existing and novel metrics defined specifically for this class of explanations. All metrics have been included in an open Python framework, named LEAF. The purpose of LEAF is to provide a reference for end users to evaluate explanations in a standardised and unbiased way, and to guide researchers towards developing improved explainable techniques. Comment: 16 pages, 8 figures.
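
    As a minimal illustration of the instability issue mentioned above, the sketch below runs LIME several times on the same instance and measures how much the top-k features agree across runs. It uses only the public lime and scikit-learn packages; LEAF's own API is not reproduced here, and the explanation_stability helper, the dataset and the choice of k are illustrative assumptions.

    # Illustrative sketch only: repeated LIME explanations for one instance,
    # compared via pairwise Jaccard overlap of their top-k features.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_breast_cancer()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data, feature_names=data.feature_names, class_names=data.target_names
    )

    def top_features(x, k=5):
        """Return the top-k feature indices of one LIME explanation."""
        exp = explainer.explain_instance(x, model.predict_proba, num_features=k)
        return {idx for idx, _ in exp.as_map()[1]}

    def explanation_stability(x, runs=10, k=5):
        """Mean pairwise Jaccard overlap between repeated explanations;
        values well below 1.0 indicate the instability discussed above."""
        sets = [top_features(x, k) for _ in range(runs)]
        scores = [len(a & b) / len(a | b)
                  for i, a in enumerate(sets) for b in sets[i + 1:]]
        return float(np.mean(scores))

    print(explanation_stability(data.data[0]))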

    Streamlining models with explanations in the learning loop

    Several explainable AI methods allow a Machine Learning user to get insights into the classification process of a black-box model in the form of local linear explanations. With such information, the user can judge which features are locally relevant for the classification outcome, and get an understanding of how the model reasons. Standard supervised learning processes are purely driven by the original features and target labels, without any feedback loop informed by the local relevance of the features identified by the post-hoc explanations. In this paper, we exploit this newly obtained information to design a feature engineering phase, where we combine explanations with feature values. To do so, we develop two different strategies, named Iterative Dataset Weighting and Targeted Replacement Values, which generate streamlined models that better mimic the explanation process presented to the user. We show how these streamlined models compare to the original black-box classifiers, in terms of accuracy and compactness of the newly produced explanations. Comment: 16 pages, 10 figures, available repository.
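
    The sketch below conveys the general feedback-loop idea only; it is not the paper's Iterative Dataset Weighting or Targeted Replacement Values algorithm. It aggregates per-feature SHAP relevance from local explanations and uses it to rescale the inputs before retraining, assuming the shap and scikit-learn packages are available; the names relevance and streamlined are illustrative.

    # Illustrative sketch only: use aggregated local explanations as a
    # feature-relevance signal in a simple retraining loop.
    import numpy as np
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    black_box = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

    # Aggregate local explanations into a global per-feature relevance vector.
    shap_values = shap.TreeExplainer(black_box).shap_values(X_tr)
    relevance = np.abs(shap_values).mean(axis=0)
    relevance /= relevance.max()

    # Feature-engineering step: rescale features by their explained relevance
    # and retrain, so the new model leans on the features that the
    # explanations actually highlight to the user.
    streamlined = GradientBoostingClassifier(random_state=0).fit(X_tr * relevance, y_tr)

    print("black-box accuracy:  ", black_box.score(X_te, y_te))
    print("streamlined accuracy:", streamlined.score(X_te * relevance, y_te))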

    Co-design of human-centered, explainable AI for clinical decision support

    eXplainable AI (XAI) involves two intertwined but separate challenges: the development of techniques to extract explanations from black-box AI models, and the way such explanations are presented to users, i.e., the explanation user interface. Despite its importance, the second aspect has received limited attention so far in the literature. Effective AI explanation interfaces are fundamental for allowing human decision-makers to take advantage of, and effectively oversee, high-risk AI systems. Following an iterative design approach, we present the first cycle of prototyping-testing-redesigning of an explainable AI technique and its explanation user interface for clinical Decision Support Systems (DSS). We first present an XAI technique that meets the technical requirements of the healthcare domain: sequential, ontology-linked patient data, and multi-label classification tasks. We demonstrate its applicability to explain a clinical DSS, and we design a first prototype of an explanation user interface. Next, we test this prototype with healthcare providers and collect their feedback, with a two-fold outcome: first, we obtain evidence that explanations increase users’ trust in the XAI system, and second, we obtain useful insights into the perceived deficiencies of their interaction with the system, so that we can re-design a better, more human-centered explanation interface.

    Coalition Formation via Negotiation in Multiagent Systems with Voluntary Attacks

    Argumentation frameworks, as put forward by Dung, consider only one kind of attack among arguments. In this paper, we propose to extend Dung’s argumentation framework with voluntary attacks in the context of multiagent systems, characterized by the possibility for the attacker to decide whether or not to attack. Enabling voluntary attacks impacts the acceptability of the arguments in the framework, and therefore becomes a subject of debate between the agents. Agents can negotiate about which subset of voluntary attacks can be raised, and they form coalitions after the negotiation process.
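
    A toy sketch of the underlying intuition (not the paper's formal machinery): a Dung-style framework whose attack relation is split into fixed and voluntary attacks, where enabling a different subset of voluntary attacks changes the grounded extension and is therefore worth negotiating over. The grounded_extension function and the example arguments are illustrative assumptions.

    # Illustrative sketch only: grounded acceptability under two possible
    # outcomes of a negotiation over which voluntary attacks are raised.
    def grounded_extension(arguments, attacks):
        """Naive fixed-point computation of the grounded extension."""
        accepted, defeated = set(), set()
        changed = True
        while changed:
            changed = False
            for a in arguments:
                attackers = {x for (x, y) in attacks if y == a}
                if a not in accepted and attackers <= defeated:
                    accepted.add(a)
                    changed = True
                if a not in defeated and attackers & accepted:
                    defeated.add(a)
                    changed = True
        return accepted

    arguments = {"a", "b", "c"}
    fixed_attacks = {("b", "c")}
    voluntary_attacks = {("a", "b")}           # the attacker may choose to raise it

    for raised in (set(), voluntary_attacks):  # two possible negotiation outcomes
        print(raised, "->", grounded_extension(arguments, fixed_attacks | raised))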

    Climate, geographic distribution and viviparity in Liolaemus (Reptilia; Squamata) species: when hypotheses need to be tested

    Reptile distributions, given this group's strong dependence on particular temperature requirements, may be constrained by climate. The relationship between reptile viviparity and climate has yielded two previously proposed hypotheses (the cold climate and the maternal manipulation hypotheses) that, together with the climatic variability hypothesis, theoretically link climate, distribution and reproductive mode. The extensive variation in mean environmental temperature associated with changes in parity strategies has received much attention in this ectothermic group. Among South American lizards, Liolaemus species are distributed in both cold and warm regions, and more than 50 % of described species are viviparous. We studied 47 Liolaemus species, considering climatic data from their collection sites, their preferred body temperature (Tpref), its coefficient of variation (CV) and their thermal tolerance limits (TT). Our results do not support the climatic variability hypothesis, although it has been supported in previous studies. We found a relationship between viviparity and elevation, but not between viviparity and thermal climatic variables. Finally, viviparous Liolaemus species showed more precise thermoregulatory behavior than oviparous ones, supporting the maternal manipulation hypothesis. Funding: Agencia Nacional de Promoción Científica y Tecnológica de Argentina (PICT 06-01205) and Consejo Nacional de Investigaciones Científicas y Tecnológicas (PIP 2846 and PIP 6287).

    Runtime Verification Through Forward Chaining

    In this paper we present a novel rule-based approach for Runtime Verification of FLTL properties over finite but expanding traces. Our system exploits Horn clauses in implication form and relies on a forward-chaining monitoring algorithm. This approach avoids the branching structure and exponential complexity typical of tableaux-based formulations, creating monitors with a single state and a fixed number of rules. This yields a fast and scalable tool for Runtime Verification: we present the technical details together with a working implementation.
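
    The following is a toy sketch of the forward-chaining idea, not the paper's FLTL monitor construction: Horn-style rules in implication form are applied to the facts observed so far, and a violation is flagged as soon as the expanding trace derives one. The rule set, the monitor function and the example property (a grant must be preceded by a request) are illustrative assumptions.

    # Illustrative sketch only: a forward-chaining monitor over an expanding
    # trace, with Horn-style rules written as (body, head) pairs.
    RULES = [
        # body (all facts must hold)        -> head
        ({"grant", "no_prior_request"},        "violation"),
        ({"request"},                          "prior_request"),
    ]

    def forward_chain(facts, rules):
        """Apply the rules until no new fact can be derived (fixed point)."""
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if body <= facts and head not in facts:
                    facts.add(head)
                    changed = True
        return facts

    def monitor(trace):
        """Process an expanding trace one event at a time."""
        facts = set()
        for event in trace:
            facts.add(event)
            if "prior_request" not in facts:
                facts.add("no_prior_request")
            facts = forward_chain(facts, RULES)
            if "violation" in facts:
                return f"violation at event '{event}'"
            facts.discard("no_prior_request")   # recomputed at each step
            facts.discard(event)                # only persistent facts are kept
        return "no violation so far (trace may still expand)"

    print(monitor(["request", "grant"]))   # no violation so far
    print(monitor(["grant"]))              # violation at event 'grant'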